
Appendices A Proofs in Section 3

Neural Information Processing Systems

As the set of solutions to Eq. (3.4) is a line parallel to the subspace E ... A.2 Proof of Lemma 2. For every θ ∈ E, we have Φθ = e. The auxiliary algorithm (A.1) can be rewritten in the following vector form ... The Bellman operator H is indifferent on equivalence classes, i.e., H(Q + x) − H(Q) ∈ E for all x ∈ E, so it is impossible to apply the finite-time analyses in the literature to establish convergence of the iterates to some fixed point. Then the following properties hold: Lemma 4.a) implies that ..., so Lemma 4.b) implies ... Proposition 2. If M is L-smooth with respect to ∥·∥ ... We now analyze the iterates generated by the following stochastic approximation scheme. We make the following assumptions regarding the function H and its stochastic sample Ĥ. Assumption 4. 1. H ... A and B ... 3. There exists a fixed equivalence class, i.e., x ... We now study the last term. Notice that the monotonicity of the infimal convolution (Lemma 4.a) and Lemma 4.b)) implies ... By the update rule (B.5), we have E[...] ... Let us consider the decreasing stepsize first.
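The fragment above concerns a stochastic approximation scheme with a decreasing stepsize for a fixed point of an operator H. As a minimal sketch of that kind of iteration, the following uses the standard Robbins–Monro form x_{k+1} = x_k + α_k(Ĥ(x_k) − x_k) with α_k = 1/(k+1); the contractive operator, fixed point, and noise model here are illustrative stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative gamma-contraction toward a known fixed point q_star
# (a stand-in for the Bellman-type operator H in the text).
q_star = np.array([1.0, -2.0, 0.5])

def H(x):
    gamma = 0.5
    return q_star + gamma * (x - q_star)

def H_hat(x):
    # Noisy sample of H, as in the stochastic approximation scheme.
    return H(x) + 0.1 * rng.standard_normal(x.shape)

x = np.zeros(3)
for k in range(5000):
    alpha = 1.0 / (k + 1)           # decreasing stepsize
    x = x + alpha * (H_hat(x) - x)  # Robbins-Monro update

# x ends up close to the fixed point q_star of H.
```

With a decreasing stepsize the noise is averaged out over time, which is the regime the passage analyzes first.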


Algorithmic Thinking Theory

Bateni, MohammadHossein, Cohen-Addad, Vincent, Gu, Yuzhou, Lattanzi, Silvio, Meierhans, Simon, Mohri, Christopher

arXiv.org Artificial Intelligence

Initial challenges, such as grade-school mathematics (GSM8K) and standard competition math (MATH dataset), have largely been surmounted, pushing the frontier of AI reasoning toward "grand challenge" problems, such as those found in the International Mathematical Olympiad (IMO). These problems, renowned for their demand for deep insight, creativity, and rigorous proof, expose a fascinating weakness in modern LLMs. While a model's performance on a single attempt (termed pass@1) may be very low, its ability to produce a correct answer within k attempts (pass@k) can be significantly higher. This pass@1 versus pass@k gap, especially pronounced when sampling with high temperature to produce diverse outputs, suggests that models possess a vast, latent capability that is not accessible in a single, high-confidence generation. Interestingly, to recover the full power of the model it is not sufficient to simply use multiple attempts. In fact, even the pass@k metric fails to capture the full story. On the most difficult problems, simply sampling k times and selecting the best answer (e.g., "best-of-32") still yields poor results. For instance, Huang and Yang (2025) report that a best-of-32 baseline on the IMO 2025 problems achieved an accuracy of only 31.6-38.1% for leading models [HY25]. This paradox lies at the heart of our work: the latent capability of LLMs is not merely a matter of selection (finding one correct needle in a haystack of k attempts), but one of synthesis.
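The pass@1 versus pass@k gap described above can be made concrete with the standard unbiased estimator pass@k = 1 − C(n−c, k)/C(n, k) for n sampled attempts of which c are correct. This estimator comes from the code-generation evaluation literature, not from this paper; a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k from n samples with c correct.

    pass@k = 1 - C(n - c, k) / C(n, k): the probability that a uniformly
    random subset of k of the n attempts contains at least one success.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct attempt
    return 1.0 - comb(n - c, k) / comb(n, k)

# A model that solves a hard problem in only 3 of 100 sampled attempts:
p1 = pass_at_k(100, 3, 1)    # pass@1 = 0.03
p32 = pass_at_k(100, 3, 32)  # pass@32 is roughly 0.69: a large gap
```

The gap between `p1` and `p32` illustrates the latent capability the passage describes, while the poor best-of-32 results show that selection alone does not close it.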



A Omitted Proofs

Neural Information Processing Systems

In this section we include all of the proofs omitted from the main body. For the convenience of the reader, we restate each claim before proceeding with its proof. A.1 Preliminary Proofs. We commence with the proof of Proposition 1. Proposition 1. For any η > 0 and at all times t ∈ ℕ, the OFTRL optimization problem on Line 3 of Algorithm 1 admits a unique optimal solution. Uniqueness follows immediately from strict convexity. In the rest of the proof we focus on the existence part. We start by showing that there exists a point x ∈ X whose coordinates are all strictly positive.
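The uniqueness-from-strict-convexity argument can be illustrated concretely. Assuming a negative-entropy regularizer (an assumption for illustration; the paper's regularizer may differ), the FTRL-style problem max over the simplex of ⟨u, λ⟩ − (1/η)Σᵢ λᵢ log λᵢ has the unique, strictly positive closed-form solution λ = softmax(η·u):

```python
import numpy as np

def oftrl_entropy_step(utilities: np.ndarray, eta: float) -> np.ndarray:
    """Unique maximizer of <u, lam> - (1/eta) * sum(lam * log(lam))
    over the probability simplex.

    Strict convexity of negative entropy gives uniqueness; the closed
    form softmax(eta * u) shows the solution exists and every
    coordinate is strictly positive (interior of the simplex).
    """
    z = eta * utilities
    z = z - z.max()  # shift for numerical stability; softmax is shift-invariant
    lam = np.exp(z)
    return lam / lam.sum()

lam = oftrl_entropy_step(np.array([1.0, 0.0, -1.0]), eta=2.0)
# lam sums to 1 and every coordinate is strictly positive.
```

This mirrors the structure of the proof: uniqueness from strict convexity, existence witnessed by a strictly positive point.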




Supplementary material

Neural Information Processing Systems

Appendix B proves universal approximation of the Neural CDE model, and is substantially more technical than the rest of this paper. Appendix C proves that the Neural CDE model subsumes alternative ODE models which depend directly and nonlinearly on the data. Appendix D gives the full details of every experiment, such as the choice of optimiser, hyperparameter searches, and so on. To evaluate the model as discussed in Section 3.2, X must be at least continuous and piecewise differentiable. A.1 Differentiating with respect to the time points. However, there is a technical caveat in the specific case of derivatives with respect to the initial time t. A.2 Adaptive step size solvers. There is one further caveat that must be considered.
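The requirement that X be at least continuous and piecewise differentiable is typically met by interpolating the discrete observations. A minimal sketch with piecewise-linear interpolation, the simplest scheme meeting that requirement (smoother choices such as cubic splines are also common; the paper's exact construction may differ):

```python
import numpy as np

# Irregularly sampled observations (t_i, x_i) of a 1-dimensional channel.
ts = np.array([0.0, 0.3, 1.1, 1.5, 2.7])
xs = np.array([0.0, 0.8, -0.2, 0.5, 1.0])

def X(t):
    """Piecewise-linear interpolant: continuous, piecewise differentiable."""
    return np.interp(t, ts, xs)

def dX(t):
    """Derivative of X between knots: the slope of the active segment.

    This is what drives the CDE, whose integrand involves dX/dt.
    """
    i = np.clip(np.searchsorted(ts, t, side="right") - 1, 0, len(ts) - 2)
    return (xs[i + 1] - xs[i]) / (ts[i + 1] - ts[i])
```

Between observation times X is differentiable, so dX(t) is defined everywhere except at the knots, which suffices for piecewise differentiability.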


Test-time Diverse Reasoning by Riemannian Activation Steering

Khanh, Ly Tran Ho, Zhu, Dongxuan, Yue, Man-Chung, Nguyen, Viet Anh

arXiv.org Artificial Intelligence

Best-of-$N$ reasoning improves the accuracy of language models on complex tasks by sampling multiple candidate solutions and then selecting the best one according to some criterion. A critical bottleneck for this strategy is limited output diversity: the model generates similar outputs despite stochastic sampling, and hence repeats the same errors. To address this lack of variance in reasoning paths, we propose a novel unsupervised activation steering strategy that simultaneously optimizes the steering vectors for multiple reasoning trajectories at test time. At each synchronization anchor along the batched generation process, we find the steering vectors that maximize the total volume spanned by all possible subsets of intervened activations. We demonstrate that these steering vectors can be determined by solving a Riemannian optimization problem over a product of spheres with a log-determinant objective function. We then use a Riemannian block-coordinate descent algorithm with a well-tuned learning rate to obtain a stationary point of the problem, and we apply these steering vectors until the generation process reaches the next synchronization anchor. Empirical evaluations on popular mathematical benchmarks demonstrate that our test-time Riemannian activation steering strategy outperforms vanilla sampling techniques in terms of generative diversity and solution accuracy.
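The optimization described in the abstract can be sketched in a minimal form: steering vectors s_i on unit spheres are updated block by block to maximize the log-determinant (volume) of the Gram matrix of the steered activations a_i + ε·s_i, where the Riemannian gradient is the Euclidean gradient projected onto the sphere's tangent space, followed by renormalization as the retraction. All quantities below (ε, the learning rate, the synthetic activations, the regularizer δ) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def logdet_volume(V, delta=1e-3):
    """Regularized log-volume spanned by the rows of V."""
    G = V @ V.T + delta * np.eye(V.shape[0])
    return np.linalg.slogdet(G)[1]

def steer(A, eps=0.5, lr=0.05, iters=100, delta=1e-3):
    """Riemannian block-coordinate ascent for diverse steering vectors.

    Each row s_i lives on the unit sphere; we maximize
    log det((A + eps*S)(A + eps*S)^T + delta*I).
    """
    n, d = A.shape
    S = rng.standard_normal((n, d))
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    for _ in range(iters):
        for i in range(n):  # block-coordinate: one sphere at a time
            V = A + eps * S
            Ginv = np.linalg.inv(V @ V.T + delta * np.eye(n))
            g = 2.0 * eps * (Ginv[i] @ V)  # Euclidean gradient wrt s_i
            g -= (g @ S[i]) * S[i]         # project onto the tangent space
            S[i] = S[i] + lr * g
            S[i] /= np.linalg.norm(S[i])   # retraction back to the sphere
    return S

# Nearly collinear "activations" (the failure mode: similar outputs).
A = np.tile(rng.standard_normal(8), (4, 1)) + 0.01 * rng.standard_normal((4, 8))
S = steer(A)
# The steered set A + 0.5*S spans a far larger volume than A alone.
```

The log-determinant objective directly penalizes collinear trajectories, which is why steering toward larger spanned volume counteracts the "same error recited $N$ times" failure mode.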